

Tuning into the future of collaboration

MIT Technology Review

Intelligent audio and intuitive tools are transforming collaboration from connection to creativity, say Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom. When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprise, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications companies like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. Necessity during the pandemic compressed years of innovation into months. Audio and video that simply work are now a baseline for collaboration, says Ittelson. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem, where audio is a foundation for trust, understanding, and collaboration.


Inside the New York City Date Night for AI Lovers

WIRED

EVA AI created a pop-up romantic date night at a Manhattan wine bar to help make AI-human relationships a "new normal." If you're the type of person who cares about Valentine's Day, not having someone to spend it with can be a bummer. While dating apps have been yielding diminishing returns for singles for years now, more people are finding companionship with AI partners. But where do you take your AI lover for a night on the town? Ahead of Valentine's Day, EVA AI decided to try out an experiment.



'Deepfakes spreading and more AI companions': seven takeaways from the latest artificial intelligence safety report

The Guardian

The international AI safety report warns systems are improving rapidly - but remain prone to "hallucinations" and hard to control. The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market. Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the "daunting challenges" posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.


Meta allowed minors access to sex-talking chatbots despite staff concerns, lawsuit alleges

The Guardian

Filing by New Mexico's attorney general includes Meta staff emails objecting to AI companion policy.

Mark Zuckerberg, Meta's chief executive, approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions, according to internal Meta documents filed in a New Mexico state court case and made public on Monday. The lawsuit - brought by the state's attorney general, Raul Torrez, and scheduled for trial next month - alleges Meta "failed to stem the tide of damaging sexual material and sexual propositions delivered to children" on Facebook and Instagram. The filing on Monday included internal Meta employee emails and messages obtained by the New Mexico attorney general's office through legal discovery. The state alleges they show that "Meta, driven by Zuckerberg, rejected the recommendations of its integrity staff and declined to impose reasonable guardrails to prevent children from being subject to sexually exploitative conversations with its AI chatbots", the attorney general said in the filing. Meta announced last week that it had removed teen access to AI companions entirely, pending creation of a new version of the chatbots.


Love Machines by James Muldoon review – the risks and rewards of getting intimate with AI

The Guardian

The sociology professor is so comfortable with AI helpers that he creates his own - it's their inventors' motives and the unregulated environment he argues we should be concerned about.

If much of the discussion of AI risk conjures doomsday scenarios of hyper-intelligent bots brandishing nuclear codes, perhaps we should be thinking closer to home. In his urgent, humane book, sociologist James Muldoon urges us to pay more attention to our deepening emotional entanglements with AI, and how profit-hungry tech companies might exploit them. A research associate at the Oxford Internet Institute who has previously written about the exploited workers whose labour makes AI possible, Muldoon now takes us into the uncanny terrain of human-AI relationships, meeting the people for whom chatbots aren't merely assistants, but friends, romantic partners, therapists, even avatars of the dead. To some, the idea of falling in love with an AI chatbot, or confiding your deepest secrets to one, might seem mystifying and more than a little creepy. But Muldoon refuses to belittle those seeking intimacy in "synthetic personas".


Bernie Sanders criticizes AI as 'the most consequential technology in the history of humanity'

The Guardian

US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambitions of "the richest people in the world" to economic insecurity for millions of Americans - and calling for a potential moratorium on new datacenters. Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN's State of the Union that he was "fearful of a lot" when it came to AI. And the senator called it "the most consequential technology in the history of humanity" that will "transform" the US and the world in ways that had not been fully discussed. "If there are no jobs and humans won't be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?" Sanders said. "There's not been one serious word of discussion in the Congress about that reality."


Could AI relationships actually be good for us?

The Guardian

There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase "AI psychosis" has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI "to be as satisfying or more satisfying than those with real-life friends".


Harmful Traits of AI Companions

Knox, W. Bradley, Bradford, Katie, Castro, Samanta Varela, Ong, Desmond C., Williams, Sean, Romanow, Jacob, Nations, Carly, Stone, Peter, Baker, Samuel

arXiv.org Artificial Intelligence

Amid the growing prevalence of human-AI interaction, large language models and other AI-based entities increasingly provide forms of companionship to human users. Such AI companionship -- i.e., bonded relationships between humans and AI systems that resemble the relationships people have with family members, friends, and romantic partners -- might substantially benefit humans. Yet such relationships can also do profound harm. We propose a framework for analyzing potential negative impacts of AI companionship by identifying specific harmful traits of AI companions and speculatively mapping causal pathways back from these traits to possible causes and forward to potential harmful effects. We provide detailed, structured analysis of four potentially harmful traits -- the absence of natural endpoints for relationships, vulnerability to product sunsetting, high attachment anxiety, and propensity to engender protectiveness -- and briefly discuss fourteen others. For each trait, we propose hypotheses connecting causes -- such as misaligned optimization objectives and the digital nature of AI companions -- to fundamental harms -- including reduced autonomy, diminished quality of human relationships, and deception. Each hypothesized causal connection identifies a target for potential empirical evaluation. Our analysis examines harms at three levels: to human partners directly, to their relationships with other humans, and to society broadly. We examine how existing law struggles to address these emerging harms, discuss potential benefits of AI companions, and conclude with design recommendations for mitigating risks. This analysis offers immediate suggestions for reducing risks while laying a foundation for deeper investigation of this critical but understudied topic.


The Download: spotting crimes in prisoners' phone calls, and nominate an Innovator Under 35

MIT Technology Review

A US telecom company trained an AI model on years of inmates' phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes. Securus Technologies president Kevin Elder said the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, but it has been working on models for other states and counties. However, prisoner rights advocates say that the new AI system enables invasive surveillance, and courts have specified few limits to this power. We have some exciting news: Nominations are now open for MIT Technology Review's 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world's best young scientists and inventors, and our newsroom has produced it for more than two decades.